Convergence of large-deviation estimators


Similar articles

Functional Large Deviation

We establish functional large deviation principles (FLDPs) for waiting and departure processes in single-server queues with unlimited waiting space and the first-in first-out service discipline. We apply the extended contraction principle to show that these processes obey FLDPs in the function space D with one of the non-uniform Skorohod topologies whenever the arrival and service processes obey FL...
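The waiting-time process of a single-server FIFO queue, as described above, can be simulated with the classical Lindley recursion. This is a generic sketch, not the paper's construction; the exponential distributions and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Lindley recursion for a single-server FIFO queue with unlimited waiting space:
#   W[k+1] = max(0, W[k] + S[k] - A[k+1])
# where S are service times and A are interarrival times.
n = 10_000
service = rng.exponential(0.8, n)       # mean service time 0.8 (assumption)
interarrival = rng.exponential(1.0, n)  # mean interarrival 1.0 -> utilization 0.8

w = np.zeros(n)
for k in range(n - 1):
    w[k + 1] = max(0.0, w[k] + service[k] - interarrival[k + 1])

# For this M/M/1 example the long-run mean wait is rho/(mu - lambda) = 3.2.
print(f"simulated mean wait: {w.mean():.2f}")
```

Large deviation principles for this process concern the probability that the sample path of W strays far from its typical behavior; the simulation above only exhibits the typical regime.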

Large Deviation Theory

If we draw a random variable n times from Q, the probability distribution of the sum of the random variables is the convolution of Q with itself n times. As n → ∞, this n-fold convolution tends to a normal distribution by the central limit theorem. This is shown in Figure 1. The top line is a computed normal distribution with the same mean as Q. However, as shown in Figure 3, when plotted on a lo...
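The convergence of the n-fold convolution to a Gaussian can be checked directly. The distribution Q below is an illustrative example, not the one from the snippet's figures:

```python
import numpy as np

# Hypothetical discrete distribution Q on {0, 1, 2}; the values are illustrative.
q = np.array([0.2, 0.5, 0.3])

def n_fold_convolution(q, n):
    """Probability distribution of the sum of n i.i.d. draws from q."""
    p = q.copy()
    for _ in range(n - 1):
        p = np.convolve(p, q)
    return p

n = 50
p = n_fold_convolution(q, n)        # support is 0 .. 2*n
support = np.arange(len(p))
mean = float((support * p).sum())   # = n * E[Q]
var = float(((support - mean) ** 2 * p).sum())

# Central limit theorem: p is close to a normal density with the same mean/variance.
gauss = np.exp(-((support - mean) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
print(f"mean={mean:.1f}  var={var:.1f}  max |p - gauss| = {np.abs(p - gauss).max():.4f}")
```

On a log scale the tails of p visibly depart from the Gaussian approximation, which is precisely the regime that large deviation theory describes.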

About Multigrid Convergence of Some Length Estimators

An interesting property for digital curve-length estimators is convergence toward the continuous length, and the associated convergence speed, as the digitization step h tends to 0. On the one hand, it has been proved that local estimators do not satisfy this convergence. On the other hand, DSS- and MLP-based estimators have been proved to converge, but only under some convexity and smoothn...
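The failure of local estimators can be seen with the simplest one, which sums the lengths of the axis-aligned boundary edges of the digitization: for a disk of radius 1 it converges to the taxicab perimeter 8 rather than the true length 2π ≈ 6.28, no matter how small h gets. A minimal numpy sketch (grid bounds and the step values are illustrative choices):

```python
import numpy as np

def digitized_perimeter(h):
    """Local length estimator: h times the number of boundary edges of the
    digitized unit disk on a square grid of step h."""
    xs = np.arange(-1.5, 1.5, h)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 < 1.0
    edges = (np.count_nonzero(inside[1:, :] != inside[:-1, :])
             + np.count_nonzero(inside[:, 1:] != inside[:, :-1]))
    return edges * h

estimates = {h: digitized_perimeter(h) for h in [0.1, 0.01, 0.005]}
for h, est in estimates.items():
    print(f"h={h}: estimated length {est:.3f}  (true length 2*pi = 6.283)")
```

Refining h does not help: the estimate stabilizes near 8 instead of 2π, which is the multigrid non-convergence of local estimators mentioned above.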

Multigrid convergence of discrete geometric estimators

The analysis of digital shapes requires tools to determine their geometric characteristics accurately. Their boundary is by nature discrete and, seen as a continuous curve, is jagged: either straight in pieces or non-differentiable. Discrete geometric estimators are specific tools designed to determine geometric information on such curves. We present here global geometric estimators of ar...

On Convergence of Kernel Learning Estimators

The paper studies kernel regression learning from the stochastic optimization and ill-posedness points of view. Namely, the convergence properties of kernel learning estimators are investigated under a gradual elimination of the regularization parameter as the number of observations rises. We derive computable non-asymptotic bounds on the deviation of the expected risk from its best possible value an...
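A minimal sketch of the setting, assuming kernel ridge regression with a regularization parameter that is gradually eliminated (λ_n = n^(-1/2)) as the sample size grows; the kernel, target function, noise level, and decay rate are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(a, b, width=0.1):
    """Gaussian (RBF) kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

def kernel_ridge(x_train, y_train, x_test, lam):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, predict on x_test."""
    K = gaussian_kernel(x_train, x_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return gaussian_kernel(x_test, x_train) @ alpha

x_test = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x_test)

errs = []
for n in [50, 200, 800]:
    x = rng.uniform(0.0, 1.0, n)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)
    lam = n ** -0.5                      # regularization eliminated as n grows
    pred = kernel_ridge(x, y, x_test, lam)
    errs.append(float(np.mean((pred - target) ** 2)))
    print(f"n={n:4d}  lambda={lam:.3f}  test MSE={errs[-1]:.4f}")
```

The decay rate of λ_n matters: eliminating regularization too quickly leaves the ill-posed problem under-regularized, while eliminating it too slowly leaves residual bias, which is the trade-off such convergence analyses quantify.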

Journal

Journal title: Physical Review E

Year: 2015

ISSN: 1539-3755, 1550-2376

DOI: 10.1103/physreve.92.052104